Adaptive Independent Metropolis – Hastings
Authors
Abstract
We propose an adaptive independent Metropolis–Hastings algorithm with the ability to learn from all previous proposals in the chain except the current location. It is an extension of the independent Metropolis–Hastings algorithm. Convergence is proved provided a strong Doeblin condition is satisfied, which essentially requires that all the proposal functions have uniformly heavier tails than the stationary distribution. The proof also holds if proposals depending on the current state are used intermittently, provided the information from these iterations is not used for adaptation. The algorithm gives samples from the exact distribution within a finite number of iterations with probability arbitrarily close to 1. The algorithm is particularly useful when a large number of samples from the same distribution is necessary, as in Bayesian estimation, and in CPU-intensive applications such as inverse problems and optimization.
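To make the idea concrete, here is a minimal sketch of independent Metropolis–Hastings with periodic adaptation. It is not the authors' algorithm: the target (a standard normal), the normal proposal family, the adaptation schedule, and the re-fitting rule are all illustrative assumptions. In this sketch the proposal is re-fit from the chain history excluding the current state, and its scale is kept deliberately wider than the empirical spread, loosely mimicking the heavier-tails requirement of the Doeblin condition.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Illustrative target: a standard normal density (up to a constant).
    return -0.5 * x**2

def adaptive_imh(n_iter=5000, adapt_every=500):
    """Sketch of an adaptive independent Metropolis-Hastings sampler.

    The proposal q is a normal density independent of the current state.
    Every `adapt_every` iterations its mean and scale are re-fit from the
    chain history, excluding the current location; the scale is floored
    and inflated so the proposal stays heavier-tailed than the target.
    """
    mu, sigma = 0.0, 5.0           # initial wide proposal
    x = rng.normal(mu, sigma)      # current state
    chain = []
    for i in range(1, n_iter + 1):
        y = rng.normal(mu, sigma)  # independent proposal (ignores x)

        # log of the proposal density at a point (normalizing constants
        # that cancel in the ratio are dropped).
        def log_q(z):
            return -0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma)

        # IMH acceptance ratio: pi(y) q(x) / (pi(x) q(y)).
        log_a = (log_target(y) + log_q(x)) - (log_target(x) + log_q(y))
        if np.log(rng.uniform()) < log_a:
            x = y
        chain.append(x)

        # Periodic adaptation from past states, excluding the current one.
        if i % adapt_every == 0 and len(chain) > 1:
            history = np.array(chain[:-1])
            mu = float(history.mean())
            sigma = max(2.0 * float(history.std()), 0.5)
    return np.array(chain)

samples = adaptive_imh()
```

For the standard-normal target, the second half of the chain should have sample mean near 0 and sample standard deviation near 1; the inflation factor on `sigma` is a design choice that trades some acceptance rate for the safety of a heavier-tailed proposal.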
Similar resources
Adaptive Independent Metropolis-Hastings by Fast Estimation of Mixtures of Normals
Adaptive Metropolis-Hastings samplers use information obtained from previous draws to tune the proposal distribution. The tuning is carried out automatically, often repeatedly, and continues after the burn-in period. Because the resulting chain is not Markovian, adaptation needs to be done carefully to ensure convergence to the correct ergodic distribution. In this paper we distill recent theor...
On the ergodicity properties of some adaptive MCMC algorithms
In this paper we study the ergodicity properties of some adaptive Markov chain Monte Carlo algorithms (MCMC) that have been recently proposed in the literature. We prove that under a set of verifiable conditions, ergodic averages calculated from the output of a so-called adaptive MCMC sampler converge to the required value and can even, under more stringent assumptions, satisfy a central limit ...
Adaptive Independent Metropolis–Hastings 1
We propose an adaptive independent Metropolis–Hastings algorithm with the ability to learn from all previous proposals in the chain except the current location. It is an extension of the independent Metropolis–Hastings algorithm. Convergence is proved provided a strong Doeblin condition is satisfied, which essentially requires that all the proposal functions have uniformly heavier tails than th...
Output-Sensitive Adaptive Metropolis-Hastings for Probabilistic Programs
We introduce an adaptive output-sensitive Metropolis-Hastings algorithm for probabilistic models expressed as programs, Adaptive Lightweight Metropolis-Hastings (AdLMH). The algorithm extends Lightweight Metropolis-Hastings (LMH) by adjusting the probabilities of proposing random variables for modification to improve convergence of the program output. We show that AdLMH converges to the correct...
Optimal Proposal Distributions and Adaptive MCMC
We review recent work concerning optimal proposal scalings for Metropolis-Hastings MCMC algorithms, and adaptive MCMC algorithms for trying to improve the algorithm on the fly.